Automated Dynamic Firmware Analysis at Scale: A Case Study on Embedded Web Interfaces
Embedded devices are becoming more widespread, interconnected, and
web-enabled than ever. However, recent studies showed that these devices are
far from being secure. Moreover, many embedded systems rely on web interfaces
for user interaction or administration. Unfortunately, web security is known to
be difficult, and therefore the web interfaces of embedded systems represent a
considerable attack surface.
In this paper, we present the first fully automated framework that applies
dynamic firmware analysis techniques to achieve, in a scalable manner,
automated vulnerability discovery within embedded firmware images. We apply our
framework to study the security of embedded web interfaces running in
Commercial Off-The-Shelf (COTS) embedded devices, such as routers, DSL/cable modems, VoIP phones, and IP/CCTV cameras. We introduce a methodology and implement
a scalable framework for discovery of vulnerabilities in embedded web
interfaces regardless of the vendor, device, or architecture. To achieve this goal, our framework performs full-system emulation to execute firmware images in a software-only environment, i.e., without involving any physical embedded devices. Then, we analyze the web interfaces within the
firmware using both static and dynamic tools. We also present some interesting
case-studies, and discuss the main challenges associated with the dynamic
analysis of firmware images and their web interfaces and network services. The
observations we make in this paper shed light on an important aspect of
embedded devices which was not previously studied at a large scale.
We validate our framework by testing it on 1925 firmware images from 54
different vendors. We discover important vulnerabilities in 185 firmware
images, affecting nearly a quarter of vendors in our dataset. These
experimental results demonstrate the effectiveness of our approach.
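The scalable analysis loop described in this abstract can be sketched as follows. This is a minimal illustration with stubbed, hypothetical helpers (`extract_rootfs` and the scanning step are placeholders), not the paper's actual framework; the real system unpacks each image and boots it under a full-system emulator before pointing web scanners at it.

```python
# Minimal sketch of a scalable dynamic-analysis pipeline over firmware
# images. All helper names are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

def extract_rootfs(image_path):
    # Placeholder: unpack the firmware image (e.g. with an extraction
    # tool) and return the path of the extracted root filesystem.
    return image_path + ".rootfs"

def emulate_and_scan(image_path):
    """Boot the firmware in a software-only environment, then run
    static and dynamic web-interface scanners against it (stubbed)."""
    rootfs = extract_rootfs(image_path)
    # In the real framework this step starts an emulator, waits for the
    # embedded web server to come up, and scans its interface.
    return {"image": image_path, "rootfs": rootfs, "vulns": []}

def analyze_at_scale(images, workers=8):
    # Emulation is independent per image, so images can be processed in
    # parallel -- the key to scaling to thousands of firmware samples.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(emulate_and_scan, images))
```

The per-image independence is what makes the approach scale: adding workers (or machines) increases throughput without changing the analysis logic.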
INCHAIN: a cyber insurance architecture with smart contracts and self-sovereign identity on top of blockchain
Despite the rapid growth of the cyber insurance market in recent years, insurance companies in this area face several challenges, such as a lack of data, a shortage of automated tasks, increased fraudulent claims from legal policyholders, attackers masquerading as legal policyholders, and insurance companies becoming targets of cybersecurity attacks due to the abundance of data they store. On top of that, there is a lack of Know Your Customer procedures. To address these challenges, in this article, we present INCHAIN, an innovative architecture that utilizes Blockchain technology to provide data transparency and traceability. The backbone of the architecture is complemented by Smart Contracts, which automate cyber insurance processes, and Self-Sovereign Identity for robust identification. The effectiveness of INCHAIN's architecture is compared with the literature against the challenges the cyber insurance industry faces. In a nutshell, our approach presents a significant advancement in the field of cyber insurance, as it effectively combats the issue of fraudulent claims and ensures proper customer identification and authentication. Overall, this research demonstrates a novel and effective solution to the complex problem of managing cyber insurance, providing a solid foundation for future developments in the field.
The Art of False Alarms in the Game of Deception: Leveraging Fake Honeypots for Enhanced Security
The great popularity of the Internet increases the concern for the safety of its users, as many malicious Web pages pop up on a daily basis. Client honeypots are tools that detect malicious Web pages which aim to infect their visitors. These tools are widely used by researchers and anti-virus companies in their attempt to protect Internet users from being infected. Unfortunately, cyber-criminals are becoming aware of this type of detection and create evasion techniques that allow their pages to behave in a benign way when they suspect they are being analyzed. This bi-faceted behavior enables them to operate for a longer period, which translates into more profit. Hence, these deceptive Web pages pose a significant challenge to existing client honeypot approaches, which are incapable of detecting such pages while they exhibit the aforementioned behavior. In this paper, we mitigate this problem by designing and developing a framework that benefits from this bi-faceted behavior. Our main goal is to protect users from being infected. More precisely, we leverage the evasion techniques used by cyber-criminals and implement a prototype, called SCARECROW, which triggers false alarms in the case of deceptive Web pages. Consequently, users who use SCARECROW for Web surfing can remain protected even if they visit a malicious Website. We evaluate our implementation against malicious URLs provided by a large anti-virus company and show that when SCARECROW is deployed, malicious Websites with bi-faceted behavior do not launch their attacks against normal users.
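The core trick — making a malicious page believe it is inside an analysis environment so that it stays benign — can be sketched as a page-rewriting step, e.g. in a local proxy. The decoy marker and function names below are illustrative assumptions, not SCARECROW's actual implementation:

```python
# Illustrative sketch: inject decoy "analysis environment" artifacts
# into every page before it reaches the browser, so evasive malicious
# pages detect a (fake) honeypot and refrain from attacking.

DECOY_SNIPPET = (
    "<script>/* decoy: mimic client-honeypot instrumentation */"
    "window.__capture_bot__ = true;</script>"
)

def add_decoys(html: str) -> str:
    """Insert the decoy marker just before </head> if present,
    otherwise prepend it to the document."""
    marker = "</head>"
    if marker in html:
        return html.replace(marker, DECOY_SNIPPET + marker, 1)
    return DECOY_SNIPPET + html
```

Because the decoy is injected for every visitor, a page that checks for such artifacts before attacking will always see them and stay dormant — the false alarm works in the user's favor.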
Ransomware: Notes on the US Computer Fraud and Abuse Act and the CoE International Convention on Cybercrime
It is 2021, and cyberattacks are relentless. Attacks can take many forms, such as ransomware, which according to some estimations accounted for approximately 4000 attacks per day, with 98% of the attacks relying on social engineering. In the US alone, ransomware attacks in 2020 cost an estimated $915 million. This working paper looks into the legislative regimes applicable to ransomware from the perspective of the US Computer Fraud and Abuse Act (CFAA) and the Convention on Cybercrime of the Council of Europe (Budapest Convention). In doing so, Section 2 first describes ransomware, both from a technical perspective as well as from the perspective of the novel business model of Ransomware-as-a-Service (RaaS). Section 3 is dedicated to applying the CFAA to ransomware, whereas Section 4 does the same for the Budapest Convention. Section 5 brings together some concluding reflections regarding the two legal regimes.
Hiding Behind the Shoulders of Giants: Abusing Crawlers for Indirect Web Attacks
It could be argued that without search engines, the web would have never grown to the size that it has today. To achieve maximum coverage and provide relevant results, search engines employ large armies of autonomous crawlers that continuously scour the web, following links, indexing content, and collecting features that are then used to calculate the ranking of each page. In this paper, we describe how autonomous crawlers can be abused by attackers to exploit vulnerabilities on third-party websites while hiding the true origin of the attacks. Moreover, we show how certain vulnerabilities on websites that are currently deemed unimportant can be abused in a way that would allow an attacker to arbitrarily boost the rankings of malicious websites in the search results of popular search engines. Motivated by the potential of these vulnerabilities, we propose a series of preventive and defensive countermeasures that website owners and search engines can adopt to minimize, or altogether eliminate, the effects of crawler-abusing attacks.
Towards Automated Classification of Firmware Images and Identification of Embedded Devices
Embedded systems, as opposed to traditional computers, exhibit incredible diversity. The number of devices manufactured is constantly increasing, and each runs dedicated software, commonly known as firmware. Full firmware images are often delivered as multiple releases, correcting bugs and vulnerabilities, or adding new features. Unfortunately, there is no centralized or standardized firmware distribution mechanism. It is therefore difficult to track which vendor or device a firmware package belongs to, or to identify which firmware version is used in deployed embedded devices. At the same time, discovering devices that run vulnerable firmware packages on public and private networks is crucial to the security of those networks. In this paper, we address these problems with two different, yet complementary approaches: firmware classification and embedded web interface fingerprinting. We use supervised Machine Learning on a database subset of real-world firmware files. For this, we first tell apart firmware images from other kinds of files, and then we classify firmware images per vendor or device type. Next, we fingerprint embedded web interfaces of both physical and emulated devices. This allows recognition of web-enabled devices connected to the network. In some cases, this complementary approach allows us to logically link web-enabled online devices with the corresponding firmware package that is running on the devices. Finally, we test the firmware classification approach on 215 images with an accuracy of 93.5%, and the device fingerprinting approach on 31 web interfaces with 89.4% accuracy.
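The classification step can be illustrated with a toy nearest-centroid model over normalized byte histograms — a common feature choice for file-type and firmware classification. This only sketches the feature/label setup; the paper's actual supervised-learning pipeline and features may differ:

```python
# Toy nearest-centroid classifier over normalized byte histograms.
# Each file becomes a 256-dimensional frequency vector; a class is
# represented by the mean vector of its training samples.

def byte_histogram(data: bytes):
    hist = [0.0] * 256
    for b in data:
        hist[b] += 1.0
    n = max(len(data), 1)
    return [c / n for c in hist]

def centroids(labeled_samples):
    # labeled_samples: list of (bytes, label) training pairs
    sums, counts = {}, {}
    for data, label in labeled_samples:
        h = byte_histogram(data)
        if label not in sums:
            sums[label], counts[label] = [0.0] * 256, 0
        sums[label] = [a + b for a, b in zip(sums[label], h)]
        counts[label] += 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def classify(data: bytes, cents):
    # Assign the label whose centroid is closest in squared distance.
    h = byte_histogram(data)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(h, c))
    return min(cents, key=lambda lbl: dist(cents[lbl]))
```

In practice one would use a real classifier (e.g. random forests or SVMs) and richer features, but the pipeline shape — featurize, train per label, predict nearest class — is the same.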
Digital Forgetting Using Key Decay
During the recent development of information technology and the prevalent breakthroughs of its services, more digital data tend to be readily stored online. Despite the massive advantages, there is a pivotal need for mechanisms that make digital data forgettable. Online content can pose perilous threats in terms of privacy and security that may hinder the right to be forgotten, enshrined in the GDPR, since released data can be archived and accessed retrospectively. Prior approaches focused on various access heuristics and elastic expiration times to make the data unreachable to some extent. However, there are still many pending issues related to the proposed studies, such as securing ephemeral key storage and co-ownership data deletion. In this paper, we attempt to tackle the problem of storing ephemeral keys during the estimated validity period. Hence, we devise a novel concept called key decay over time, which can achieve the ephemeral existence of the key. The decay idea entails the gradual, irreversible corruption of the key as time passes. In the current work, we combine the concepts of gradual time elapsing and corruption into a single notion of the decay rate. Meanwhile, the irreversibility, achieved through randomness and various obfuscation strategies, impedes retrospective attacks. Over time, the decay rate gives an estimated range for the key to be destroyed entirely. Finally, we implement and thoroughly assess a proof-of-concept of the key decay, including computational complexity and security analysis.
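The key-decay idea can be sketched as follows. The exact mechanics here (randomizing a fixed number of key bits per tick) are an assumption chosen for illustration, not the paper's scheme; the point is that each tick irreversibly destroys key material, so after enough ticks the key is unrecoverable:

```python
# Toy sketch of "key decay over time": at each tick, overwrite a few
# randomly chosen key bits with random values. The writes use a CSPRNG
# and are not recorded anywhere, so the corruption is irreversible.
import secrets

def decay_step(key: bytearray, bits_per_tick: int = 8) -> None:
    """Randomize `bits_per_tick` randomly chosen bits of the key in place."""
    for _ in range(bits_per_tick):
        pos = secrets.randbelow(len(key) * 8)
        byte, bit = divmod(pos, 8)
        if secrets.randbelow(2):
            key[byte] |= 1 << bit      # force the chosen bit to 1
        else:
            key[byte] &= ~(1 << bit)   # force the chosen bit to 0

def decay(key: bytes, ticks: int, bits_per_tick: int = 8) -> bytes:
    # The decay rate is (ticks elapsed) x (bits randomized per tick):
    # it bounds how much usable key material can remain.
    buf = bytearray(key)
    for _ in range(ticks):
        decay_step(buf, bits_per_tick)
    return bytes(buf)
```

With, say, a 256-bit key and 8 bits randomized per tick, a few hundred ticks leave essentially no original key material, which matches the abstract's notion of an estimated destruction window governed by the decay rate.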
Revealing the Relationship Network Behind Link Spam
Accessing the large volume of information that is available on the Web is more important than ever before. Search engines are the primary means of helping users find the content they need. To suggest the most closely related and the most popular web pages for a user's query, search engines assign a ranking to each web page, which typically increases with the number and ranking of other websites that link to this page. However, link spammers have developed several techniques to exploit this algorithm and improve the ranking of their web pages. These techniques are commonly based on underground forums for collaborative link exchange, building a relationship network among spammers to favor their web pages in search engine results. In this study, we provide a systematic analysis of the spam link exchange performed through 15 Search Engine Optimization (SEO) forums. We design a system which is able to capture the activity of link spammers in SEO forums, identify spam link exchange, and visualize the link spam ecosystem. The outcomes of this study shed light on a different aspect of link spamming: the collaboration among spammers.